
    Extending Modular Semantics for Bipolar Weighted Argumentation (Technical Report)

    Weighted bipolar argumentation frameworks offer a tool for decision support and social media analysis. Arguments are evaluated by an iterative procedure that takes initial weights and attack and support relations into account. Until recently, the convergence of these iterative procedures in cyclic graphs was not well understood. Mossakowski and Neuhaus recently introduced a unification of different approaches and proved first convergence and divergence results. We build on this work, simplify and generalize the convergence results, and complement them with runtime guarantees. As it turns out, there is a tradeoff between a semantics' convergence guarantees and its ability to move strength values away from the initial weights. We demonstrate that divergence problems can be avoided without this tradeoff by continuizing semantics. Semantically, we extend the framework with a Duality property that assures a symmetric impact of attack and support relations. We also present a Java implementation of modular semantics and explain the practical usefulness of the theoretical ideas.
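
    To make the iterative evaluation concrete, here is a minimal Python sketch of one such procedure, assuming a DF-QuAD-style aggregation and influence function. The graph, the update rule, and the convergence check are illustrative stand-ins, not the paper's modular semantics or its Java implementation.

```python
# Minimal sketch of an iterative strength update for a weighted bipolar
# argumentation graph. The aggregation/influence functions below follow a
# DF-QuAD style and are illustrative; modular semantics allow these
# components to be swapped out.

def aggregate(values):
    """Combine attacker (or supporter) strengths into one value in [0, 1]."""
    result = 1.0
    for v in values:
        result *= (1.0 - v)
    return 1.0 - result  # probabilistic sum

def influence(weight, attack, support):
    """Move the initial weight toward 0 (attack) or 1 (support)."""
    if attack >= support:
        return weight - weight * (attack - support)
    return weight + (1.0 - weight) * (support - attack)

def iterate_strengths(graph, steps=100, eps=1e-6):
    """graph: arg -> (initial_weight, attackers, supporters).
    Returns final strength per argument, or raises if no convergence
    is reached (divergence is possible in cyclic graphs, as noted above)."""
    strength = {a: w for a, (w, _, _) in graph.items()}
    for _ in range(steps):
        new = {}
        for a, (w, attackers, supporters) in graph.items():
            att = aggregate(strength[b] for b in attackers)
            sup = aggregate(strength[b] for b in supporters)
            new[a] = influence(w, att, sup)
        if max(abs(new[a] - strength[a]) for a in graph) < eps:
            return new
        strength = new
    raise RuntimeError("no convergence within step limit")

# Example: b attacks a, c supports a.
g = {"a": (0.5, ["b"], ["c"]), "b": (0.8, [], []), "c": (0.4, [], [])}
print(iterate_strengths(g))
```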

    Syntactic Reasoning with Conditional Probabilities in Deductive Argumentation

    Evidence from studies, such as in science or medicine, often corresponds to conditional probability statements. Furthermore, evidence can conflict, in particular when it comes from multiple studies. Whilst it is natural to make sense of such evidence using arguments, there is no systematic formalism for representing and reasoning with conditional probability statements in computational argumentation. We address this shortcoming by providing a formalization of conditional probabilistic argumentation based on probabilistic conditional logic. We provide a semantics and a collection of comprehensible inference rules that give different insights into evidence. We show how arguments constructed from proofs, and attacks between them, can be analyzed as argument graphs using dialectical semantics and via the epistemic approach to probabilistic argumentation. Our approach allows for a transparent and systematic way of handling the uncertainty that often arises in evidence.
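
    As a flavour of what reasoning with conditional probability statements looks like, the sketch below derives bounds on a marginal from a single conditional statement via the law of total probability. This is a generic textbook inference rather than one of the paper's inference rules, and the function name and example numbers are made up for illustration.

```python
# Minimal sketch of syntactic reasoning with conditional probability
# statements. Given P(B|A) = p and P(A) = q, the law of total probability
#   P(B) = P(B|A)P(A) + P(B|not A)(1 - P(A)),  P(B|not A) in [0, 1]
# bounds P(B) without knowing P(B|not A): P(B) lies in [p*q, p*q + (1 - q)].
# This is one simple inference of the kind such a calculus can support.

def bound_marginal(p_b_given_a, p_a):
    lower = p_b_given_a * p_a
    upper = lower + (1.0 - p_a)
    return lower, upper

# Hypothetical study evidence: 80% of treated patients recover,
# and 60% of patients are treated.
lo, hi = bound_marginal(0.8, 0.6)
print(f"P(recover) in [{lo:.2f}, {hi:.2f}]")  # [0.48, 0.88]
```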

    SpArX: Sparse Argumentative Explanations for Neural Networks

    Neural networks (NNs) have various applications in AI, but explaining their decision process remains challenging. Existing approaches often focus on explaining how changing individual inputs affects an NN's outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to its actual mechanics. In this paper, we exploit relationships between multi-layer perceptrons (MLPs) and quantitative argumentation frameworks (QAFs) to create argumentative explanations for the mechanics of MLPs. Our SpArX method first sparsifies the MLP while maintaining as much of the original mechanics as possible. It then translates the sparse MLP into an equivalent QAF to shed light on the underlying decision process of the MLP, producing global and/or local explanations. We demonstrate experimentally that SpArX can give more faithful explanations than existing approaches, while simultaneously providing deeper insights into the actual reasoning process of MLPs.
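
    The following sketch illustrates the two steps on a single weight matrix, under strong simplifying assumptions: neurons are "clustered" by rounding their weight vectors, and each remaining signed weight is read as a support (positive) or attack (negative) edge of a QAF. The actual SpArX clustering and QAF translation are more involved.

```python
# Illustrative sketch of the two steps on one MLP layer:
# (1) sparsify by merging similar neurons, (2) read the remaining signed
# weights as a QAF, where a negative weight means "attack" and a positive
# weight means "support". Clustering by rounding weight vectors is a
# stand-in for the clustering actually used by SpArX.

import numpy as np

def sparsify_layer(W, decimals=1):
    """Merge neurons (columns of W) whose weight vectors round to the
    same values; merged neurons get the mean of their weights."""
    groups = {}
    for j in range(W.shape[1]):
        groups.setdefault(tuple(np.round(W[:, j], decimals)), []).append(j)
    return np.stack([W[:, cols].mean(axis=1) for cols in groups.values()], axis=1)

def layer_to_qaf(W, in_names, out_names):
    """Each nonzero weight becomes a support (+) or attack (-) edge."""
    edges = []
    for i, src in enumerate(in_names):
        for j, dst in enumerate(out_names):
            if W[i, j] > 0:
                edges.append((src, "supports", dst, W[i, j]))
            elif W[i, j] < 0:
                edges.append((src, "attacks", dst, W[i, j]))
    return edges

W = np.array([[0.9, 0.92, -0.5], [-0.3, -0.31, 0.7]])  # 2 inputs, 3 hidden
Ws = sparsify_layer(W)  # the first two hidden neurons merge
for e in layer_to_qaf(Ws, ["x1", "x2"], [f"h{j}" for j in range(Ws.shape[1])]):
    print(e)
```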

    Explaining Random Forests using Bipolar Argumentation and Markov Networks (Technical Report)

    Random forests are decision tree ensembles that can be used to solve a variety of machine learning problems. However, as the number of trees and their individual size can be large, their decision-making process is often incomprehensible. In order to reason about the decision process, we propose representing it as an argumentation problem. We generalize sufficient and necessary argumentative explanations using a Markov network encoding, discuss the relevance of these explanations and establish relationships to families of abductive explanations from the literature. As the complexity of the explanation problems is high, we discuss a probabilistic approximation algorithm and present first experimental results.
    Comment: Accepted for presentation at AAAI 2023. Contains an appendix with proofs and additional details about the experiments.
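
    To illustrate the idea of probabilistically approximating such explanations, the toy sketch below tests whether a partial assignment of binary features fixes a forest's majority vote by sampling completions of the remaining features. The forest, the sampling test, and all names are hypothetical; the paper instead works through a Markov network encoding.

```python
# Sketch of the Monte Carlo idea behind a probabilistic approximation:
# test whether fixing a subset of binary features is sufficient for a
# forest's prediction by sampling completions of the free features.
# Naive sampling here is a stand-in for the paper's Markov network approach.

import random

def forest_predict(trees, x):
    """Majority vote of trees; each tree is a function x -> 0/1."""
    votes = sum(t(x) for t in trees)
    return int(votes * 2 > len(trees))

def is_probably_sufficient(trees, fixed, n_features, samples=1000):
    """Estimate whether the partial assignment `fixed` (index -> 0/1)
    forces the same prediction for all completions, up to sampling error."""
    free = [i for i in range(n_features) if i not in fixed]
    target = None
    for _ in range(samples):
        x = dict(fixed)
        for i in free:
            x[i] = random.randint(0, 1)
        y = forest_predict(trees, x)
        if target is None:
            target = y
        elif y != target:
            return False  # found a completion that flips the prediction
    return True

# Toy forest of three stumps over binary features 0..2.
trees = [lambda x: x[0], lambda x: x[0] or x[1], lambda x: x[2]]
print(is_probably_sufficient(trees, {0: 1, 1: 1}, n_features=3))  # True
print(is_probably_sufficient(trees, {2: 1}, n_features=3))        # False
```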